Hello, all. I’m happy to introduce my research blog. This is where I will write about my research projects and other related stuff that I find interesting.
Hi, again. This is my first blog post. The idea here is to show a little of my research in a different way from the traditional papers that we read. I hope you enjoy reading it as much as I enjoyed writing it.
Recently I became interested in visualising evolutionary algorithms (EAs). Here, I will summarise some fascinating works that sparked my interest in this field of study.
The main idea of visualising EAs is to find efficient ways of communicating the behaviour of such algorithms. What is remarkable about these images is that they make the search process feel a little less abstract. I'm sure you have seen some of these tools before, and one of the most common is the fitness landscape.
In a population-based algorithm:
In a gradient descent algorithm (commonly used in Neural Networks and other ML algorithms):
In case this is your first time hearing about fitness landscapes, let me give a summary of what they are:
Are you interested? Take a look at this Wikipedia page: https://en.wikipedia.org/wiki/Evolutionary_landscape.
Or watch this YouTube video: https://www.youtube.com/watch?v=4pdiAneMMhU
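To make this a little more concrete, here is a minimal sketch of how a fitness landscape is typically sampled for plotting: evaluate the fitness function on a grid over the search space and look at the resulting surface. The two-peak function below is a made-up toy example, not from any specific work:

```python
import numpy as np

# A toy fitness landscape: two Gaussian peaks on a 2-D search space.
# (This function is a made-up example for illustration only.)
def fitness(x, y):
    return (np.exp(-((x - 1) ** 2 + (y - 1) ** 2))
            + 0.5 * np.exp(-((x + 1) ** 2 + (y + 1) ** 2)))

# Sample the landscape on a grid, as a plotting tool would do
# before drawing a contour or 3-D surface plot.
xs = np.linspace(-2, 2, 201)
ys = np.linspace(-2, 2, 201)
X, Y = np.meshgrid(xs, ys)
Z = fitness(X, Y)

# The global peak of this toy landscape sits at (1, 1);
# a local (lower) peak sits at (-1, -1).
i, j = np.unravel_index(np.argmax(Z), Z.shape)
print(round(float(X[i, j]), 2), round(float(Y[i, j]), 2))  # -> 1.0 1.0
```

Feeding `X`, `Y`, `Z` to something like matplotlib's `contourf` gives exactly the kind of landscape picture shown above, and overlaying a population (or a gradient-descent trajectory) on top of it shows the search moving across the surface.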
That’s super, isn’t it? These methods were developed for single-objective algorithms, and we don’t yet have many tools to visualise multi-objective EAs. Having tools like this can help us create new and (hopefully) better algorithms. That is because we can see how the process changes over time, and then we can study the strengths and weaknesses of algorithms. And by knowing these strengths and weaknesses, we can improve existing algorithms or even create new and better EAs.
(Disclaimer: Is new always better? I love this meme, so I couldn’t resist adding it here, but new doesn’t mean better. Btw, what does it mean to be better?)
Here I will discuss visualisation methods for the multiobjective domain that I have already successfully used.
Analysing algorithms using only the final approximation provides limited information about their performance, since these EAs should return a suitable set of solutions at any time during the search [1,2,3,4]. That is, looking at the whole process, not only at the end, provides more insightful information.
This image shows the Anytime HV (higher is better, shaded areas indicate the standard deviation) on UF10. The performance of MOEA/D-PS is shown as the red circles, MOEA/D with population size 500 as the green triangles, and MOEA/D with population size 50 as the blue squares. The anytime performance of MOEA/D-PS is similar to that of MOEA/D with a small population. We can see that the three variants have almost the same performance at the end of the search. This image and text are from [5].
See? The three algorithms reach the same final performance, but how they got there was different.
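In case you are curious how such an anytime plot is produced, here is a minimal sketch of the idea for two objectives: at each checkpoint during the search, measure the hypervolume (the area dominated by the current non-dominated set, relative to a reference point). The fronts, checkpoints, and reference point below are made-up examples, not data from [5]:

```python
# Minimal 2-D hypervolume (HV) sketch for a minimisation problem.
# All fronts and the reference point are made-up illustrative data.

def hypervolume_2d(front, ref):
    """Area dominated by a 2-objective (minimisation) front w.r.t. ref."""
    # Filter to the non-dominated points, sorted by the first objective.
    nd, best_f2 = [], float("inf")
    for f1, f2 in sorted(front):
        if f2 < best_f2:
            nd.append((f1, f2))
            best_f2 = f2
    # Sum the rectangles between consecutive non-dominated points.
    hv, prev_f2 = 0.0, ref[1]
    for f1, f2 in nd:
        hv += (ref[0] - f1) * (prev_f2 - f2)
        prev_f2 = f2
    return hv

# Fronts at three checkpoints of a hypothetical run: plotting these
# HV values against time gives an anytime curve like the one above.
checkpoints = [
    [(0.8, 0.9), (0.9, 0.8)],
    [(0.5, 0.9), (0.7, 0.6), (0.9, 0.5)],
    [(0.2, 0.9), (0.5, 0.4), (0.9, 0.2)],
]
for t, front in enumerate(checkpoints):
    print(t, round(hypervolume_2d(front, ref=(1.0, 1.0)), 3))
```

The HV values grow over the checkpoints, and two algorithms with the same final HV can still have very different curves, which is precisely what the anytime view exposes.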
The empirical attainment function (EAF) allows the examination of the solution sets of many different runs of an algorithm. It can illustrate where and by how much the outcomes of two algorithms differ in the objective space [6]. The EAF is based on the attainment surface and represents the probability that an arbitrary region of the objective space is attained (dominated or equalled) by an algorithm; this probability can be estimated using data collected from several independent runs of that algorithm. The attainment surface separates the objective space into two regions: (1) where the objective space is dominated (attained) by the solutions of the runs and (2) where it is not dominated by those solutions [7,8]. For example, the median attainment surface shows the regions dominated by at least half of the runs [6].
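The idea above can be sketched in a few lines: for a point in the objective space, the EAF is the fraction of independent runs whose solution set attains (weakly dominates) that point, and the median attainment surface is the boundary where this fraction reaches one half. The four runs below are made-up examples for a 2-objective minimisation problem:

```python
# Minimal empirical attainment function (EAF) sketch for two
# minimisation objectives. The runs are made-up illustrative data.

def attained(front, point):
    """True if any solution in `front` weakly dominates `point`."""
    return any(f1 <= point[0] and f2 <= point[1] for f1, f2 in front)

def eaf(runs, point):
    """Fraction of runs whose solution set attains `point`."""
    return sum(attained(front, point) for front in runs) / len(runs)

# Final fronts of four hypothetical independent runs of some algorithm.
runs = [
    [(0.2, 0.8), (0.5, 0.5)],
    [(0.3, 0.7), (0.6, 0.4)],
    [(0.1, 0.9), (0.8, 0.3)],
    [(0.4, 0.6)],
]

print(eaf(runs, (0.9, 0.9)))  # attained by every run -> 1.0
print(eaf(runs, (0.6, 0.6)))  # attained by 3 of 4 runs -> 0.75
```

Since `(0.6, 0.6)` is attained by at least half of the runs, it lies on the attained side of the median attainment surface; evaluating `eaf` over a grid of points and drawing the 0.5 level set is one way to visualise that surface.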